This paper presents a comprehensive review of ground-based agricultural robot systems and applications, with a particular focus on harvesting, covering research, commercial products, reported results, and enabling technologies. Most of the literature concerns the development of crop detection and field navigation via vision, along with the associated challenges. Health monitoring, yield estimation, water status inspection, seed planting, and weed removal are frequently encountered tasks. Regarding robotic harvesting, apples, strawberries, tomatoes, and sweet peppers are the crops most commonly considered in publications, research projects, and commercial products. Reported agricultural harvesting solutions typically consist of a mobile platform, a single robotic arm/manipulator, and various navigation/vision systems. This paper reviews the reported development of the specific functionalities and hardware typically required to operate an agricultural robot harvester; these include (a) vision systems, (b) motion planning/navigation methods (for the robotic platform and/or arm), (c) human-robot interaction (HRI) strategies with 3D visualization, (d) system operation planning & grasping strategies, and (e) robotic end-effector/gripper designs. Clearly, autonomous harvesting through robotic systems in automated agriculture remains an open research area that still presents challenges where new contributions can be made.
Automatic differentiation (AD) is a technique for computing the derivative of a function represented by a program. It is considered the de facto standard for computing derivatives in many machine learning and optimisation software tools. Despite the practicality of this technique, the performance of the differentiated programs, especially for functional languages and in the presence of vectors, is suboptimal. We present an AD system for a higher-order functional array-processing language. The core functional language underlying this system simultaneously supports both source-to-source forward-mode AD and global optimisations such as loop transformations. In combination, gradient computation with forward-mode AD can be as efficient as reverse mode, and the Jacobian matrices required for numerical algorithms such as Gauss-Newton and Levenberg-Marquardt can be computed efficiently.
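As background for the forward-mode approach, here is a minimal dual-number sketch of forward-mode AD in Python. It is illustrative only; the `Dual` class and the example function are assumptions for exposition, not the paper's system.

```python
# Minimal forward-mode AD via dual numbers: each value carries its
# derivative with respect to a chosen input alongside the value itself.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val = val   # primal value
        self.dot = dot   # tangent (derivative) component

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__

def sin(x):
    # Chain rule for sine: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# d/dx [x * sin(x) + 2x] at x = 1.5, seeded with tangent dot = 1.0
x = Dual(1.5, 1.0)
y = x * sin(x) + 2 * x
print(y.val, y.dot)  # primal value and derivative
```

Each additional input would require a separate forward pass (a separate seed), which is why forward mode usually pays off for gradients only when combined with further program transformations, as the abstract argues.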
A methodology is proposed to address a key limitation of line-of-sight emission spectroscopy: it cannot provide spatially resolved temperature measurements in nonhomogeneous temperature fields. The aim of this research is to explore the use of data-driven models for measuring temperature distributions in a spatially resolved manner from emission spectroscopy data. Two categories of data-driven methods are analyzed: (i) feature engineering combined with classical machine learning algorithms, and (ii) end-to-end convolutional neural networks (CNN). In total, combinations of fifteen feature groups and fifteen classical machine learning models, as well as eleven CNN models, are considered and their performances explored. The results indicate that the combination of feature engineering and machine learning provides better performance than the direct use of CNNs. Notably, feature engineering comprising physics-guided transformation, signal-representation-based feature extraction, and Principal Component Analysis is found to be the most effective. Moreover, it is shown that when using the extracted features, the ensemble-based light blender learning model offers the best performance, with RMSE, RE, RRMSE, and R values of 64.3, 0.017, 0.025, and 0.994, respectively. The proposed method, based on feature engineering and the light blender model, is capable of measuring nonuniform temperature distributions from low-resolution spectra, even when the species concentration distribution in the gas mixture is unknown.
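As a rough illustration of this kind of pipeline (not the paper's actual implementation; the feature set, PCA dimensionality, and regressor below are assumptions, and a single gradient-boosting model stands in for the blender ensemble):

```python
# Illustrative spectra-to-temperature pipeline: hand-crafted features,
# PCA for dimensionality reduction, then an ensemble regressor.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 200, 256

# Placeholder data: rows are low-resolution emission spectra;
# targets are (here, scalar) temperatures.
spectra = rng.random((n_samples, n_wavelengths))
temperature = rng.uniform(800, 2000, n_samples)

def extract_features(s):
    """Simple signal-representation features per spectrum (assumed set)."""
    fft_mag = np.abs(np.fft.rfft(s, axis=1))[:, :8]  # low-frequency content
    return np.column_stack([
        s.mean(axis=1), s.std(axis=1), s.max(axis=1),
        np.argmax(s, axis=1),  # peak location
        fft_mag,
    ])

X = extract_features(spectra)
model = make_pipeline(StandardScaler(), PCA(n_components=8),
                      GradientBoostingRegressor())
model.fit(X, temperature)
print(model.predict(X[:3]))
```

The paper predicts full temperature distributions rather than a scalar, so the real pipeline would emit a vector-valued target; the structure of the feature-extraction, PCA, and ensemble stages is the point here.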
The combination of artist-curated scans and deep implicit functions (IF) is enabling the creation of detailed, clothed, 3D humans from images. However, existing methods are far from perfect. IF-based methods recover free-form geometry but produce disembodied limbs or degenerate shapes for unseen poses or clothes. To increase robustness in these cases, existing work uses an explicit parametric body model to constrain surface reconstruction, but this limits the recovery of free-form surfaces such as loose clothing that deviates from the body. What we want is a method that combines the best properties of implicit and explicit methods. To this end, we make two key observations: (1) current networks are better at inferring detailed 2D maps than full 3D surfaces, and (2) a parametric model can be seen as a "canvas" for stitching together detailed surface patches. Exploiting these observations, our method, ECON, infers high-fidelity 3D humans even in loose clothes and challenging poses, while producing realistic faces and fingers. This goes beyond previous methods. Quantitative evaluation on the CAPE and Renderpeople datasets shows that ECON is more accurate than the state of the art. Perceptual studies also show that ECON's perceived realism is better by a large margin. Code and models are available for research purposes at https://xiuyuliang.cn/econ
As a result of the ever-increasing complexity of configuring and fine-tuning machine learning models, the field of automated machine learning (AutoML) has emerged over the past decade. However, software implementations like Auto-WEKA and Auto-sklearn typically focus on classical machine learning (ML) tasks such as classification and regression. Our work can be seen as the first attempt at offering a single AutoML framework for most problem settings that fall under the umbrella of multi-target prediction (MTP), which includes popular ML settings such as multi-label classification, multivariate regression, multi-task learning, dyadic prediction, matrix completion, and zero-shot learning. Automated problem selection and model configuration are achieved by extending DeepMTP, a general deep learning framework for MTP problem settings, with popular hyperparameter optimization (HPO) methods. Our extensive benchmarking across different datasets and MTP problem settings identifies cases where specific HPO methods outperform others.
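To illustrate the general shape of an HPO loop like the ones plugged into such a framework (a generic random-search sketch; the search space, function names, and metric are placeholders, not the DeepMTP API):

```python
# Generic random-search HPO sketch: sample configurations, train,
# keep the best by validation score.
import random

SEARCH_SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "hidden_dim": [64, 128, 256],
    "dropout": [0.0, 0.1, 0.3],
}

def sample_config():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def train_and_validate(config):
    """Placeholder: train a model with `config`, return validation score."""
    return random.random()  # stand-in for a real metric

best_score, best_config = float("-inf"), None
for _ in range(20):  # trial budget
    cfg = sample_config()
    score = train_and_validate(cfg)
    if score > best_score:
        best_score, best_config = score, cfg

print(best_config, best_score)
```

More sophisticated HPO methods (Bayesian optimization, Hyperband, and the like) replace the uniform sampling and fixed budget with smarter proposal and early-stopping rules, but slot into the same loop.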
Climate change is expected to aggravate wildfire activity through the exacerbation of fire weather. Improving our capabilities to anticipate wildfires on a global scale is of utmost importance for mitigating their negative effects. In this work, we create a global fire dataset and demonstrate a prototype for predicting the presence of global burned areas on a sub-seasonal scale using segmentation deep learning models. In particular, we present an open-access, global, analysis-ready datacube, which contains a variety of variables related to the seasonal and sub-seasonal fire drivers (climate, vegetation, oceanic indices, human-related variables), as well as the historical burned areas and wildfire emissions for 2001-2021. We train a deep learning model, which treats global wildfire forecasting as an image segmentation task and skillfully predicts the presence of burned areas 8, 16, 32, and 64 days ahead of time. Our work motivates the use of deep learning for global burned-area forecasting and paves the way towards improved anticipation of global wildfire patterns.
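As a schematic of the segmentation framing (a toy encoder-decoder in PyTorch under assumed input shapes and channel counts, not the paper's model):

```python
# Toy encoder-decoder that maps a stack of fire-driver variables to a
# per-pixel probability of burned area, illustrating the framing of
# wildfire forecasting as image segmentation.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_channels=12):  # e.g. climate + vegetation layers
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # logits for burned / not burned
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegNet()
x = torch.randn(1, 12, 64, 64)   # one spatial tile of driver variables
burn_logits = model(x)           # (1, 1, 64, 64): per-pixel logits
loss = nn.BCEWithLogitsLoss()(burn_logits, torch.zeros_like(burn_logits))
print(burn_logits.shape, loss.item())
```

Forecasting at different lead times (8, 16, 32, 64 days) can then be handled by shifting the burned-area target relative to the input stack.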
The current success of machine learning on image-based combustion monitoring rests on massive data, which is costly and sometimes even impossible to collect for industrial applications. To address this conflict, we introduce few-shot learning to achieve combustion monitoring and classification for the first time. Two algorithms, Siamese Network coupled with k Nearest Neighbors (SN-kNN) and Prototypical Network (PN), were tested. Rather than utilizing solely visible images as discussed in previous studies, we also used infrared (IR) images. We analyzed the training process, test performance, and inference speed of the two algorithms on both image formats, and also used t-SNE to visualize the learned features. The results demonstrate that both SN-kNN and PN were capable of distinguishing flame states after learning from merely 20 images per flame state. The worst performance, realized by PN on IR images, still achieved precision, accuracy, recall, and F1-score above 0.95. We showed that visible images exhibit more substantial differences between classes and more consistent patterns within each class, which makes training faster and model performance better compared to IR images. In contrast, the relatively low quality of IR images made it difficult for PN to extract distinguishable prototypes, which caused relatively weak performance. With the entire training set supporting classification, SN-kNN performed well with IR images. On the other hand, benefiting from its architecture design, PN is much faster than SN-kNN in both training and inference. The presented work analyzes the characteristics of both algorithms and image formats for the first time, thus providing guidance for their future utilization in combustion monitoring tasks.
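For reference, the core of prototypical-network classification can be sketched in a few lines (illustrative PyTorch; the embedding network `embed` and all shapes are assumptions, not the paper's implementation):

```python
# Prototypical-network classification sketch: embed support images,
# average embeddings per class to form prototypes, then classify each
# query by its nearest prototype in embedding space.
import torch

def proto_classify(embed, support_x, support_y, query_x, n_classes):
    """support_x: (N, C, H, W); support_y: (N,) int labels; query_x: (M, C, H, W)."""
    z_support = embed(support_x)               # (N, D) embeddings
    z_query = embed(query_x)                   # (M, D) embeddings
    prototypes = torch.stack([
        z_support[support_y == c].mean(dim=0)  # class mean embedding
        for c in range(n_classes)
    ])                                         # (n_classes, D)
    dists = torch.cdist(z_query, prototypes)   # Euclidean distances (M, n_classes)
    return dists.argmin(dim=1)                 # nearest prototype per query

# Dummy usage with a toy embedding network and 5 flame states:
embed = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(1 * 8 * 8, 16))
support_x, support_y = torch.randn(10, 1, 8, 8), torch.arange(10) % 5
preds = proto_classify(embed, support_x, support_y, torch.randn(3, 1, 8, 8), 5)
print(preds)
```

This also makes the speed difference noted above intuitive: PN compares each query against one prototype per class, whereas SN-kNN compares against every support image.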
We propose deploying a two-layer machine learning model to defend against adversarial attacks. The first layer determines whether the data have been tampered with, while the second layer solves the domain-specific problem. We explore three sets of features and three dataset variants for training the machine learning models. Our results show that clustering algorithms achieve promising results. In particular, we found that the best results were obtained by applying the DBSCAN algorithm to the Structural Similarity Index Measure computed between an image and a white reference image.
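A minimal sketch of that detection step might look as follows (an assumed setup using scikit-image and scikit-learn; the image shapes, DBSCAN parameters, and the outliers-as-tampered rule are placeholders):

```python
# Tamper-detection sketch: compute SSIM of each image against a white
# reference, then cluster the SSIM scores with DBSCAN; points labeled
# -1 (noise) are flagged as potentially tampered.
import numpy as np
from skimage.metrics import structural_similarity as ssim
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
images = rng.random((50, 32, 32))   # placeholder grayscale batch
white_ref = np.ones((32, 32))       # all-white reference image

scores = np.array([
    ssim(img, white_ref, data_range=1.0) for img in images
]).reshape(-1, 1)

labels = DBSCAN(eps=0.05, min_samples=5).fit_predict(scores)
suspect = labels == -1              # outliers relative to the clusters
print(int(suspect.sum()), "images flagged as possibly tampered")
```

Only inputs that pass this first layer would then be forwarded to the second, domain-specific model.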
Deep convolutional neural networks (CNNs) for image classification successively alternate convolution and downsampling operations, such as pooling layers or strided convolutions, so that feature maps have lower resolution the deeper the network goes. These downsampling operations save computational resources and provide the subsequent layers with some translation invariance as well as a larger receptive field. However, an inherent side effect of this is that high-level features produced at the deep end of the network are always captured in low-resolution feature maps. The inverse is also true, as shallow layers always contain only small-scale features. In biomedical image analysis, practitioners are often tasked with classifying very small image patches that carry only limited information. Intrinsically, these patches may not even contain an object, with the classification instead depending on the detection of subtle underlying patterns of unknown scale in the image texture. In these cases, every bit of information is valuable, so it is important to extract the maximum number of informative features. Motivated by these considerations, we introduce a new CNN architecture that preserves multi-scale features from deep, intermediate, and shallow layers by exploiting skip connections together with consecutive contractions and expansions of the feature maps. Using a dataset of very low-resolution patches from pancreatic ductal adenocarcinoma (PDAC) CT scans, we demonstrate that our network can outperform current state-of-the-art models.
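A rough sketch of the multi-scale idea (a toy PyTorch model that pools and concatenates features from shallow, intermediate, and deep stages; the layer sizes are assumptions, and this is not the paper's architecture):

```python
# Toy multi-scale classifier: global-pool feature maps from several
# depths and concatenate them, so shallow (fine) and deep (coarse)
# scales all reach the classification head.
import torch
import torch.nn as nn

class MultiScaleNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.MaxPool2d(2),
                                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.MaxPool2d(2),
                                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(16 + 32 + 64, n_classes)

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        # Skip-style aggregation: every scale contributes to the head.
        feats = torch.cat([self.pool(f).flatten(1) for f in (f1, f2, f3)], dim=1)
        return self.head(feats)

logits = MultiScaleNet()(torch.randn(4, 1, 32, 32))  # tiny image patches
print(logits.shape)  # (4, 2)
```

The proposed architecture goes further by also contracting and expanding feature maps along the way, but the aggregation of features from multiple depths is the common thread.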
Humans constantly interact with everyday objects to accomplish tasks. To understand such interactions, computers need to reconstruct them from cameras observing whole-body interaction with scenes. This is challenging due to occlusion between the body and objects, motion blur, depth/scale ambiguities, and the low image resolution of hands and graspable object parts. To make the problem tractable, the community has focused either on interacting hands, ignoring the body, or on interacting bodies, ignoring the hands. The GRAB dataset addresses dexterous whole-body interaction but uses marker-based MoCap and lacks images, while BEHAVE captures video of body-object interaction but lacks hand detail. We address the limitations of prior work with InterCap, a novel method that reconstructs interacting whole bodies and objects from multi-view RGB-D data, using the parametric whole-body model SMPL-X and known object meshes. To tackle the above challenges, InterCap uses two key observations: (i) contact between the hands and an object can be used to improve the pose estimation of both; (ii) Azure Kinect sensors allow us to set up a simple multi-view RGB-D capture system that minimizes the effect of occlusion while providing reasonable inter-camera synchronization. With this method, we capture the InterCap dataset, which contains 10 subjects (5 males and 5 females) interacting with 10 objects of various sizes and affordances, including contact with the hands or feet. In total, InterCap has 223 RGB-D videos, yielding 67,357 multi-view frames, each containing 6 RGB-D images. Our method provides pseudo ground-truth body meshes and objects for each video frame. Our InterCap method and dataset fill an important gap in the literature and support many research directions. Our data and code are available for research purposes.